6 research outputs found

    Sensor fusion in smart camera networks for ambient intelligence

    This short report introduces the topics of PhD research that was conducted in 2008-2013 and was defended in July 2013. The PhD thesis covers sensor fusion theory, gathers it into a framework with design rules for fusion-friendly design of vision networks, and elaborates on the rules through fusion experiments performed with four distinct applications of Ambient Intelligence.

    Home-to-home communication using 3D shadows

    In some visual communication applications it is not possible, or even desirable, to aim at a photorealistic representation of the remote person. One possibility is to aim at stylized visual representations of remote persons, e.g., as avatars shown on a display device or as shadows in lighting. In this paper we introduce a system for persistent and ambient visual communication based on capture, transmission, and rendering of 3D shadow representations of users. The shape of a person is captured using a distributed camera array, compressed, and transmitted over the network. At the receiving end the shape is projected as a shadow on a surface using a lighting device. We demonstrate that the 3D representation of the shape makes it possible to control the 2D visualization at the receiving end in many interesting ways. For example, when controlled by tracking of the observing user, the shadow may create a visual illusion of a 3D shape on the wall.
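
    The observer-dependent rendering described in the abstract amounts to re-projecting the captured 3D shape onto the wall from a virtual viewpoint that follows the tracked observer. A minimal sketch of that projection step (the function name and plane convention are assumptions for illustration, not the paper's implementation):

    ```python
    import numpy as np

    def project_shadow(points, viewpoint, wall_z=0.0):
        """Project 3D shape points onto the wall plane z = wall_z along
        rays from `viewpoint` (a virtual light / tracked observer position).

        points:    (N, 3) array of 3D points on the person's shape.
        viewpoint: (3,) position with viewpoint[2] != wall_z.
        Returns an (N, 2) array of 2D shadow coordinates on the wall.
        """
        points = np.asarray(points, dtype=float)
        viewpoint = np.asarray(viewpoint, dtype=float)
        # Ray: viewpoint + t * (p - viewpoint); solve for t where z == wall_z.
        t = (wall_z - viewpoint[2]) / (points[:, 2] - viewpoint[2])
        # Apply the same parameter t to the x and y components.
        return viewpoint[:2] + t[:, None] * (points[:, :2] - viewpoint[:2])
    ```

    Moving `viewpoint` as the tracked observer moves shifts the 2D shadow consistently with parallax, which is what creates the 3D illusion from a flat projection.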

    On efficient use of multi-view data for activity recognition

    The focus of the paper is on studying five different methods to combine multi-view data from an uncalibrated smart camera network for human activity recognition. The multi-view classification scenarios studied can be divided into two categories: view selection and view fusion methods. Selection uses a single view to classify, whereas fusion merges multi-view data at either the feature or the label level. The five methods are compared on the task of classifying human activities in three fully annotated datasets, MAS, VIHASI, and HOMELAB, and a combination dataset, MAS+VIHASI. Classification is performed on image features computed from silhouette images with a binary-tree-structured classifier using a 1D CRF for temporal modeling. The results presented in the paper show that fusion methods outperform practical selection methods. Selection methods have their advantages, but they depend strongly on how good the selection criterion is and how well it adapts to different environments. Furthermore, fusion of features outperforms the other scenarios in more controlled settings. However, the more variability there is in camera placement and in the characteristics of persons, the more likely it is that improved accuracy in multi-view activity recognition can be achieved by combining candidate labels.
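
    The selection-versus-fusion distinction can be sketched with toy helpers. This is illustrative only; the function names, the confidence-based selection rule, and the majority-vote label fusion are assumptions, not the paper's exact methods:

    ```python
    from collections import Counter

    def select_view(view_labels, view_scores):
        """View selection: classify using only the single camera whose
        classifier reports the highest confidence score."""
        best = max(range(len(view_scores)), key=lambda i: view_scores[i])
        return view_labels[best]

    def fuse_labels(view_labels):
        """Label-level fusion: combine per-view predictions by majority
        vote over the candidate labels."""
        return Counter(view_labels).most_common(1)[0][0]

    def fuse_features(view_features):
        """Feature-level fusion: concatenate per-view feature vectors
        into one descriptor, classified by a single classifier."""
        return [x for feat in view_features for x in feat]
    ```

    Feature-level fusion gives the classifier the richest input but assumes consistent views; label-level fusion is more robust when camera placement and appearance vary, which matches the trend reported in the abstract.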

    A framework for providing ergonomic feedback using smart cameras.

    The importance of proper ergonomics for the health and wellbeing of office workers is being increasingly promoted by federal agencies, such as OSHA (Occupational Safety and Health Administration) and NIOSH (National Institute for Occupational Safety and Health), which provide guidelines for improving workplace ergonomics. However, it is up to the individual workers to ensure that they are adhering to proper ergonomic practices in their office environment. In this work, we describe a collaborative framework that leverages a computer's webcam and cameras in the workplace to provide feedback relating to a worker's current ergonomic state. We use a webcam to locate a worker's face using a six degree-of-freedom (6-DoF) face tracking algorithm, and use the output to calculate useful parameters, such as a worker's average work and break periods, the distance between a worker and the computer monitor, and a measure of a worker's head motion. Cameras installed in the office provide additional information, such as posture and social interaction, that might not be captured by the front-facing webcam. We use webcams since they are unobtrusive and frequently built into laptops and computer monitors. Additionally, by leveraging a software face tracker, we can avoid the use of expensive sensing systems that are typically needed for gaze and face tracking. We describe the methods that are used to discern ergonomic parameters from the face tracking data, and how the network of cameras collaborates to provide ergonomic feedback to the workers. Experimental results show that our system provides personalized recommendations that promote workplace wellbeing without sacrificing productivity.
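
    The ergonomic parameters mentioned above (face-to-monitor distance, head motion, break periods) can all be derived from a per-frame face-pose stream. A minimal sketch under assumed conventions — poses as (x, y, z, yaw, pitch, roll) tuples in camera coordinates, `None` when no face is detected — not the paper's actual pipeline:

    ```python
    import math

    def ergonomic_stats(poses, min_break_frames=30):
        """Derive simple ergonomic parameters from a 6-DoF face-pose stream.

        poses: per-frame (x, y, z, yaw, pitch, roll) tuples, or None when no
               face is detected (assumes at least one frame with a face).
        Returns (mean face-to-camera distance, mean frame-to-frame head
        translation, number of breaks of >= min_break_frames absent frames).
        """
        present = [p for p in poses if p is not None]
        mean_dist = sum(math.sqrt(x * x + y * y + z * z)
                        for x, y, z, *_ in present) / len(present)

        # Head motion: translation between consecutive frames with a face.
        motions = [math.dist(a[:3], b[:3])
                   for a, b in zip(poses, poses[1:])
                   if a is not None and b is not None]
        mean_motion = sum(motions) / len(motions) if motions else 0.0

        # Breaks: runs of consecutive absent frames of sufficient length.
        breaks, run = 0, 0
        for p in poses:
            if p is None:
                run += 1
            else:
                breaks += run >= min_break_frames
                run = 0
        breaks += run >= min_break_frames
        return mean_dist, mean_motion, breaks
    ```

    A real system would calibrate the distance estimate against the monitor position and smooth the pose stream, but the same absence-run logic is a natural way to separate work periods from breaks.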